Date: Fri May 24 11:54:07 2019
Scientist: Ran Yin
Sequencing (Waksman): Dibyendu Kumar
Statistics: Davit Sargsyan
Principal Investigator: Ah-Ng Kong
This script was developed using the DADA2 Pipeline Tutorial (1.12), with tips and tricks from the University of Maryland School of Medicine Institute for Genome Sciences (IGS) Microbiome Analysis Workshop (April 8-11, 2019).
FastQ files were downloaded from Dr. Kumar's DropBox. A total of 60 files (2 per sample, paired-end) and 2 metadata files were downloaded.
# Reset working directory to root
cd ..
# Navigate to folder where FastQ files will be stored
cd nrf2ubiome/fastq_may2019/
# Download a zip folder with all files
curl -L https://www.dropbox.com/sh/sm9tinm0f5r6y1v/AADjGPRRNiIM7zMSfANDkQjFa?dl=1 > download.zip
# Unzip into the folder
unzip download.zip
# Remove the zip file
rm download.zip
# Navigate out of the FastQ folder
cd ..
# Move metadata files to documents
mv fastq_may2019/sample_name.xlsx docs/
mv fastq_may2019/Kong16Smay_summary.xlsx docs/
NOTE: the file sample_name_16s_may_2019_Kong_Ran_Davit.csv was created by Ran and contains additional information about the samples, making it a more complete metadata file. It is based on the files downloaded above.
# sink(file = "tmp/log_nrf2ubiome_dada2_v2.txt")
# date()
require(knitr)
Loading required package: knitr
require(kableExtra)
Loading required package: kableExtra
# # Increase memory size to 64 GB----
# invisible(utils::memory.limit(65536))
options(stringsAsFactors = FALSE)
# str(knitr::opts_chunk$get())
# # NOTE: the below does not work!
# knitr::opts_chunk$set(echo = FALSE,
# message = FALSE,
# warning = FALSE,
# error = FALSE)
# On Windows set multithread=FALSE----
mt <- TRUE
# # Source: https://benjjneb.github.io/dada2/index.html
# # Installed on J&J Rstudio server on 05/22/2019
# if (!requireNamespace("BiocManager", quietly = TRUE))
# install.packages("BiocManager")
# BiocManager::install(version = "3.8")
# BiocManager::install("dada2", version = "3.8")
# BiocManager::install("phyloseq", version = "3.8")
# Follow the tutorial:
# https://benjjneb.github.io/dada2/tutorial.html
require(data.table)
Loading required package: data.table
data.table 1.12.2 using 18 threads (see ?getDTthreads). Latest news: r-datatable.com
require(dada2)
Loading required package: dada2
Loading required package: Rcpp
require(phyloseq)
Loading required package: phyloseq
require(ggplot2)
Loading required package: ggplot2
library(stringr)
require(DT)
Loading required package: DT
[1] "0A1_S21_L001_R1_001.fastq.gz" "0A1_S21_L001_R2_001.fastq.gz"
[3] "0A2_S22_L001_R1_001.fastq.gz" "0A2_S22_L001_R2_001.fastq.gz"
[5] "0A3_S23_L001_R1_001.fastq.gz" "0A3_S23_L001_R2_001.fastq.gz"
[7] "0B1_S24_L001_R1_001.fastq.gz" "0B1_S24_L001_R2_001.fastq.gz"
[9] "0B2_S25_L001_R1_001.fastq.gz" "0B2_S25_L001_R2_001.fastq.gz"
[11] "0D1_S26_L001_R1_001.fastq.gz" "0D1_S26_L001_R2_001.fastq.gz"
[13] "0D3_S27_L001_R1_001.fastq.gz" "0D3_S27_L001_R2_001.fastq.gz"
[15] "0E2_S28_L001_R1_001.fastq.gz" "0E2_S28_L001_R2_001.fastq.gz"
[17] "0E3_S29_L001_R1_001.fastq.gz" "0E3_S29_L001_R2_001.fastq.gz"
[19] "0E4_S30_L001_R1_001.fastq.gz" "0E4_S30_L001_R2_001.fastq.gz"
[21] "1A1_S1_L001_R1_001.fastq.gz" "1A1_S1_L001_R2_001.fastq.gz"
[23] "1A2_S2_L001_R1_001.fastq.gz" "1A2_S2_L001_R2_001.fastq.gz"
[25] "1A3_S3_L001_R1_001.fastq.gz" "1A3_S3_L001_R2_001.fastq.gz"
[27] "1B1_S4_L001_R1_001.fastq.gz" "1B1_S4_L001_R2_001.fastq.gz"
[29] "1B2_S5_L001_R1_001.fastq.gz" "1B2_S5_L001_R2_001.fastq.gz"
[31] "1D1_S6_L001_R1_001.fastq.gz" "1D1_S6_L001_R2_001.fastq.gz"
[33] "1D3_S7_L001_R1_001.fastq.gz" "1D3_S7_L001_R2_001.fastq.gz"
[35] "1E2_S8_L001_R1_001.fastq.gz" "1E2_S8_L001_R2_001.fastq.gz"
[37] "1E3_S9_L001_R1_001.fastq.gz" "1E3_S9_L001_R2_001.fastq.gz"
[39] "1E4_S10_L001_R1_001.fastq.gz" "1E4_S10_L001_R2_001.fastq.gz"
[41] "2A1_S11_L001_R1_001.fastq.gz" "2A1_S11_L001_R2_001.fastq.gz"
[43] "2A2_S12_L001_R1_001.fastq.gz" "2A2_S12_L001_R2_001.fastq.gz"
[45] "2A3_S13_L001_R1_001.fastq.gz" "2A3_S13_L001_R2_001.fastq.gz"
[47] "2B1_S14_L001_R1_001.fastq.gz" "2B1_S14_L001_R2_001.fastq.gz"
[49] "2B2_S15_L001_R1_001.fastq.gz" "2B2_S15_L001_R2_001.fastq.gz"
[51] "2D1_S16_L001_R1_001.fastq.gz" "2D1_S16_L001_R2_001.fastq.gz"
[53] "2D3_S17_L001_R1_001.fastq.gz" "2D3_S17_L001_R2_001.fastq.gz"
[55] "2E2_S18_L001_R1_001.fastq.gz" "2E2_S18_L001_R2_001.fastq.gz"
[57] "2E3_S19_L001_R1_001.fastq.gz" "2E3_S19_L001_R2_001.fastq.gz"
[59] "2E4_S20_L001_R1_001.fastq.gz" "2E4_S20_L001_R2_001.fastq.gz"
In gray-scale is a heat map of the frequency of each quality score at each base position. The median quality score at each position is shown by the green line, and the quartiles of the quality score distribution by the orange lines. The red line shows the scaled proportion of reads that extend to at least that position (this is more useful for other sequencing technologies, as Illumina reads are typically all the same length, hence the flat red line).
Source: DADA2 Pipeline Tutorial (1.12). NOTE: the reason the quality seems to be low at the beginning is that the program uses moving averages, so there are fewer data points at the start. No trimming is needed on the left.
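The quality profiles described above come from dada2's plotQualityProfile. A minimal sketch, assuming fnFs and fnRs hold the full paths to the forward (R1) and reverse (R2) FastQ files:

```r
require(dada2)

# Per-base quality heat maps for the first two forward and reverse files;
# pass the whole vector (e.g. plotQualityProfile(fnFs)) to facet all samples
plotQualityProfile(fnFs[1:2])
plotQualityProfile(fnRs[1:2])
```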
user system elapsed
149.783 17.884 167.734
user system elapsed
156.302 19.160 175.497
The reads were trimmed approximately to the length at which the median quality score (the green line) dropped below 20.
The forward reads were of very good quality; only the last 20 bases were trimmed.
The reverse reads were of lower quality and were truncated at 220 bases.
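The truncation described above maps onto the truncLen argument of filterAndTrim. A sketch, assuming filtFs and filtRs are hypothetical output paths for the filtered files and mt is the multithread flag set earlier; maxEE and truncQ follow the DADA2 tutorial defaults:

```r
require(dada2)

# Truncate forward reads at 280 bp and reverse reads at 220 bp,
# discard reads with > 2 expected errors, and remove PhiX contamination
out <- filterAndTrim(fnFs, filtFs,
                     fnRs, filtRs,
                     truncLen = c(280, 220),
                     maxEE = c(2, 2),
                     truncQ = 2,
                     rm.phix = TRUE,
                     compress = TRUE,
                     multithread = mt)
```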
[1] "0A1" "0A2" "0A3" "0B1" "0B2" "0D1" "0D3" "0E2" "0E3" "0E4" "1A1" "1A2" "1A3" "1B1"
[15] "1B2" "1D1" "1D3" "1E2" "1E3" "1E4" "2A1" "2A2" "2A3" "2B1" "2B2" "2D1" "2D3" "2E2"
[29] "2E3" "2E4"
user system elapsed
2258.665 152.333 118.018
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 6854039 366.1 13010428 694.9 11731517 626.6
Vcells 11763342 89.8 48897792 373.1 79502919 606.6
NOTE: parameter learning is computationally intensive, so by default the learnErrors function uses only a subset of the data (the first 1M reads). If the plotted error model does not look like a good fit, try increasing the nreads parameter to see if the fit improves.
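The error-rate learning step referenced in the note looks roughly like the following sketch (filtFs/filtRs are the filtered file paths assumed above; in dada2 1.12 the subset size is controlled by nbases, with nreads kept for back-compatibility):

```r
require(dada2)

# Fit the error model from a subset of the filtered reads;
# raise nbases (default ~1e8) if the fitted curves look poor
errF <- learnErrors(filtFs, multithread = mt)
errR <- learnErrors(filtRs, multithread = mt)

# Visual check: observed error rates (points) vs. fitted model (black line)
plotErrors(errF, nominalQ = TRUE)
```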
130706240 total bases in 466808 reads from 2 samples will be used for learning the error rates.
user system elapsed
30271.536 522.956 1496.506
102697760 total bases in 466808 reads from 2 samples will be used for learning the error rates.
user system elapsed
19882.888 561.425 1493.000
NOTE: for larger datasets (exceeding available RAM) process samples one-by-one. See DADA2 Workflow on Big Data.
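The dereplication step whose output is printed below can be sketched as (derepFs/derepRs are assumed object names; for datasets exceeding RAM, this would instead be run inside a per-sample loop as the note suggests):

```r
require(dada2)

# Collapse identical reads into unique sequences with consensus qualities
derepFs <- derepFastq(filtFs, verbose = TRUE)
derepRs <- derepFastq(filtRs, verbose = TRUE)
```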
user system elapsed
81.365 16.617 97.964
user system elapsed
71.576 12.134 83.693
$`0A1_S21_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 233099 reads in 59075 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 59075 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 31197 55999 34020 40514 209 ...
$`0A2_S22_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 233709 reads in 67691 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 67691 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 32962 31334 46702 23 33768 ...
$`0A3_S23_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 217299 reads in 59738 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 59738 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 34880 4 53282 34881 56607 ...
$`0B1_S24_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 199271 reads in 56812 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 56812 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 44623 37397 15385 24 28683 ...
$`0B2_S25_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 213767 reads in 59590 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 59590 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 25005 30223 34455 18 573 ...
$`0D1_S26_L001_R1_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 269543 reads in 72940 unique sequences
Sequence lengths: min=280, median=280, max=280
$quals: Quality matrix dimension: 72940 280
Consensus quality scores: min=7, median=38, max=38
$map: Map from reads to unique sequences: 30244 71783 35669 8803 214 ...
$`0A1_S21_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 233099 reads in 105488 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 105488 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 65238 38144 83 48031 50030 ...
$`0A2_S22_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 233709 reads in 112739 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 112739 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 90410 41432 25474 92 674 ...
$`0A3_S23_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 217299 reads in 107791 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 107791 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 32628 51 72368 35974 95062 ...
$`0B1_S24_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 199271 reads in 96551 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 96551 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 58569 21816 22308 46 41388 ...
$`0B2_S25_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 213767 reads in 94566 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 94566 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 88831 33547 2297 14860 1334 ...
$`0D1_S26_L001_R2_001.fastq.gz`
derep-class: R object describing dereplicated sequencing reads
$uniques: 269543 reads in 111158 unique sequences
Sequence lengths: min=220, median=220, max=220
$quals: Quality matrix dimension: 111158 220
Consensus quality scores: min=7, median=37, max=38
$map: Map from reads to unique sequences: 35237 105839 86341 94052 911 ...
Notes from the IGS Workshop:
Sample Inference - inferring the sequence variants in each sample.
By default, the dada function processes each sample independently, but pooled processing is available with pool=TRUE and that may give better results for low sampling depths at the cost of increased computation time.
All samples are simultaneously loaded into memory by default. If the datasets approach or exceed available RAM, it is preferable to process samples one-by-one in a streaming fashion: see DADA2 Workflow on Big Data for an example.
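The two processing modes described above differ only in the pool argument to dada. A sketch, assuming the derepFs/derepRs and errF/errR objects from the earlier steps:

```r
require(dada2)

# Default: each sample denoised independently
dadaFs <- dada(derepFs, err = errF, multithread = mt)
dadaRs <- dada(derepRs, err = errR, multithread = mt)

# Pooled alternative (better sensitivity at low depths, more compute):
# dadaFs <- dada(derepFs, err = errF, pool = TRUE, multithread = mt)
```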
Sample 1 - 233099 reads in 59075 unique sequences.
Sample 2 - 233709 reads in 67691 unique sequences.
Sample 3 - 217299 reads in 59738 unique sequences.
Sample 4 - 199271 reads in 56812 unique sequences.
Sample 5 - 213767 reads in 59590 unique sequences.
Sample 6 - 269543 reads in 72940 unique sequences.
Sample 7 - 180994 reads in 55223 unique sequences.
Sample 8 - 134451 reads in 39817 unique sequences.
Sample 9 - 281050 reads in 80690 unique sequences.
Sample 10 - 173556 reads in 45853 unique sequences.
Sample 11 - 155070 reads in 41575 unique sequences.
Sample 12 - 255031 reads in 69397 unique sequences.
Sample 13 - 194217 reads in 51725 unique sequences.
Sample 14 - 134180 reads in 40968 unique sequences.
Sample 15 - 210437 reads in 56124 unique sequences.
Sample 16 - 249059 reads in 75496 unique sequences.
Sample 17 - 178556 reads in 52129 unique sequences.
Sample 18 - 214147 reads in 59888 unique sequences.
Sample 19 - 119020 reads in 34823 unique sequences.
Sample 20 - 127319 reads in 37828 unique sequences.
Sample 21 - 152185 reads in 39742 unique sequences.
Sample 22 - 124137 reads in 32545 unique sequences.
Sample 23 - 153266 reads in 40678 unique sequences.
Sample 24 - 131861 reads in 35591 unique sequences.
Sample 25 - 147366 reads in 40089 unique sequences.
Sample 26 - 128298 reads in 35590 unique sequences.
Sample 27 - 253136 reads in 64675 unique sequences.
Sample 28 - 212124 reads in 62152 unique sequences.
Sample 29 - 192063 reads in 50583 unique sequences.
Sample 30 - 117067 reads in 31281 unique sequences.
user system elapsed
31216.841 512.650 1633.601
Sample 1 - 233099 reads in 105488 unique sequences.
Sample 2 - 233709 reads in 112739 unique sequences.
Sample 3 - 217299 reads in 107791 unique sequences.
Sample 4 - 199271 reads in 96551 unique sequences.
Sample 5 - 213767 reads in 94566 unique sequences.
Sample 6 - 269543 reads in 111158 unique sequences.
Sample 7 - 180994 reads in 88798 unique sequences.
Sample 8 - 134451 reads in 66269 unique sequences.
Sample 9 - 281050 reads in 123894 unique sequences.
Sample 10 - 173556 reads in 71896 unique sequences.
Sample 11 - 155070 reads in 80760 unique sequences.
Sample 12 - 255031 reads in 109779 unique sequences.
Sample 13 - 194217 reads in 89050 unique sequences.
Sample 14 - 134180 reads in 70815 unique sequences.
Sample 15 - 210437 reads in 94276 unique sequences.
Sample 16 - 249059 reads in 116146 unique sequences.
Sample 17 - 178556 reads in 91718 unique sequences.
Sample 18 - 214147 reads in 95385 unique sequences.
Sample 19 - 119020 reads in 58198 unique sequences.
Sample 20 - 127319 reads in 60776 unique sequences.
Sample 21 - 152185 reads in 71308 unique sequences.
Sample 22 - 124137 reads in 62079 unique sequences.
Sample 23 - 153266 reads in 69599 unique sequences.
Sample 24 - 131861 reads in 63348 unique sequences.
Sample 25 - 147366 reads in 70430 unique sequences.
Sample 26 - 128298 reads in 60373 unique sequences.
Sample 27 - 253136 reads in 116542 unique sequences.
Sample 28 - 212124 reads in 106067 unique sequences.
Sample 29 - 192063 reads in 93810 unique sequences.
Sample 30 - 117067 reads in 60932 unique sequences.
user system elapsed
20766.002 486.052 1646.466
user system elapsed
153.305 1.951 155.222
user system elapsed
0.165 0.034 0.200
[1] 30 24043
user system elapsed
6510.554 3.348 196.734
[1] 30 8129
[1] "Chimeras = 43.2%"
NOTE: According to the IGS, de novo chimeras are determined based on the most abundant sequences in a given dataset. Usually 5-7% of sequences are chimeras; it is much higher in this dataset (> 40%). IGS recommends revisiting the removal of primers, as ambiguous nucleotides in unremoved primers interfere with chimera identification.
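The chimera removal that produced the figure above can be sketched as follows (seqtab is the sequence table from makeSequenceTable; the percentage formula is one plausible way to reproduce the reported fraction):

```r
require(dada2)

# Remove chimeras by consensus across samples
seqtab.nochim <- removeBimeraDenovo(seqtab,
                                    method = "consensus",
                                    multithread = mt,
                                    verbose = TRUE)

# Fraction of ASVs flagged as chimeric
paste0("Chimeras = ",
       round(100 * (1 - ncol(seqtab.nochim) / ncol(seqtab)), 1),
       "%")
```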
getN <- function(x) {
sum(getUniques(x))
}
track <- cbind(out,
sapply(dadaFs,
getN),
sapply(mergers,
getN),
rowSums(seqtab),
rowSums(seqtab.nochim))
colnames(track) <- c("Raw",
"Filtered",
"Denoised",
"Merged",
"Tabled",
"Non-Chimeras")
rownames(track) <- sample.names
DT::datatable(format(track,
big.mark = ","),
options = list(pageLength = nrow(track)))
IGS suggests the number of merged sequences can potentially be increased by truncating the reads less (the truncLen parameter of the filterAndTrim function), specifically by making sure that the truncated reads still span the amplicon. That might not be the issue here, as the remaining reads are relatively long (280 bases for the forward and 220 bases for the reverse reads).
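The merge step that truncLen affects can be sketched as (object names follow the tracking table built below; merging requires the truncated forward and reverse reads to overlap by at least 12 bases by default):

```r
require(dada2)

# Merge denoised forward/reverse pairs and tabulate ASVs per sample
mergers <- mergePairs(dadaFs, derepFs,
                      dadaRs, derepRs,
                      verbose = TRUE)
seqtab <- makeSequenceTable(mergers)
```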
Write out and save your results thus far:
fc <- file("data_may2019/all_runs_dada2_ASV.fasta")
fltp <- character()
for( i in 1:ncol(seqtab)) {
fltp <- append(fltp,
paste0(">Seq_",
i))
fltp <- append(fltp,
colnames(seqtab)[i])
}
writeLines(fltp,
fc)
close(fc)
head(fltp)
[1] ">Seq_1"
[2] "GTGTCAGCAGCCGCGGTAATACGGGGGATGCAAGCGTTATCCGGATTTATTGGGTTTAAAGGGTGCGTAGGCTGTGAGGTAAGTCAGCGGTGAAATGCCCCCGCTCAACGGGGTGAAGTGCCATTGATACTGCCTTGCTGGAATGCGGATGCCGTGGGAGGAATGTGTGGTGTAGCGGTGAAATGCATAGATATCACACAGAACACCGATTGCGAAGGCATCTCACGAATCCGCTATTGACGCTGATGCACGAAAGCGTGGGTATCAAACAGGATTAGAAACCCCCGTAGTCC"
[3] ">Seq_2"
[4] "GTGTCAGCAGCCGCGGTAATACGGGGGATGCAAGCGTTATCCGGATTTATTGGGTTTAAAGGGTGCGTAGGCTGTGAGGTAAGTCAGCGGTGAAATGCCCCCGCTCAACGGGGTGAAGTGCCATTGATACTGCCTTGCTGGAATGCGGATGCCGTGGGAGGAATGTGTGGTGTAGCGGTGAAATGCATAGATATCACACAGAACACCGATTGCGAAGGCATCTCACGAATCCGCTATTGACGCTGATGCACGAAAGCGTGGGTATCAAACAGGATTAGAAACCCTCGTAGTCC"
[5] ">Seq_3"
[6] "GTGTCAGCAGCCGCGGTAATACGGGGGATGCAAGCGTTATCCGGATTTATTGGGTTTAAAGGGTGCGTAGGCTGTGAGGTAAGTCAGCGGTGAAATGCCCCCGCTCAACGGGGTGAAGTGCCATTGATACTGCCTTGCTGGAATGCGGATGCCGTGGGAGGAATGTGTGGTGTAGCGGTGAAATGCATAGATATCACACAGAACACCGATTGCGAAGGCATCTCACGAATCCGCTATTGACGCTGATGCACGAAAGCGTGGGTATCAAACAGGATTAGAAACCCTTGTAGTCC"
rm(fltp)
gc()
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 9749228 520.7 16376844 874.7 16376844 874.7
Vcells 1162007129 8865.5 1854713728 14150.4 1854699402 14150.3
NOTE: create taxa.RData once, then comment out the assignment and load the R data file when rerunning the code.
# taxa <- assignTaxonomy(seqs = seqtab.nochim,
# refFasta = "tax/silva_nr_v132_train_set.fa",
# multithread = mt)
# save(taxa,
# file = "data_may2019/taxa.RData")
load("data_may2019/taxa.RData")
print(paste("Number of unique references =",
format(nrow(taxa),
big.mark = ",")))
[1] "Number of unique references = 8,129"
DT::datatable(taxa[1:5, ],
rownames = FALSE)
# Keep only the references found in the data
taxa.tmp <- taxa[rownames(taxa) %in% colnames(seqtab.nochim), ]
print(paste("Number of references matched in the data =",
format(nrow(taxa.tmp),
big.mark = ",")))
[1] "Number of references matched in the data = 8,129"
# # Add species (do it once)
# taxa.plus <- addSpecies(taxtab = taxa.tmp,
# refFasta = "tax/silva_species_assignment_v132.fa",
# verbose = TRUE)
# save(taxa.plus,
# file = "data_may2019/taxa.plus.RData")
load("data_may2019/taxa.plus.RData")
dt.otu <- otu_table(seqtab.nochim,
taxa_are_rows = FALSE)
sample_names(dt.otu) <- sample.names
print("Sample names in OTU table")
[1] "Sample names in OTU table"
sample_names(dt.otu)
[1] "0A1" "0A2" "0A3" "0B1" "0B2" "0D1" "0D3" "0E2" "0E3" "0E4" "1A1" "1A2" "1A3" "1B1"
[15] "1B2" "1D1" "1D3" "1E2" "1E3" "1E4" "2A1" "2A2" "2A3" "2B1" "2B2" "2D1" "2D3" "2E2"
[29] "2E3" "2E4"
metadata <- sample_data(dt.meta)
rownames(metadata) <- metadata$SAMPLE_NAME
print("Sample names in metadata")
[1] "Sample names in metadata"
sample_names(metadata)
[1] "1A1" "1A2" "1A3" "1B1" "1B2" "1D1" "1D3" "1E2" "1E3" "1E4" "2A1" "2A2" "2A3" "2B1"
[15] "2B2" "2D1" "2D3" "2E2" "2E3" "2E4" "0A1" "0A2" "0A3" "0B1" "0B2" "0D1" "0D3" "0E2"
[29] "0E3" "0E4"
ps_may2019 <- phyloseq(dt.otu,
metadata,
tax_table(taxa))
# sample_names(ps_may2019)
save(ps_may2019,
file = "data_may2019/ps_may2019.RData")
sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux Server 7.5 (Maipo)
Matrix products: default
BLAS/LAPACK: /usr/lib64/R/lib/libRblas.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8
[4] LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] DT_0.6 stringr_1.4.0 ggplot2_3.1.1 phyloseq_1.26.1
[5] dada2_1.10.1 Rcpp_1.0.1 data.table_1.12.2 kableExtra_1.1.0
[9] knitr_1.23
loaded via a namespace (and not attached):
[1] nlme_3.1-137 bitops_1.0-6 matrixStats_0.54.0
[4] webshot_0.5.1 RColorBrewer_1.1-2 httr_1.4.0
[7] GenomeInfoDb_1.18.2 tools_3.5.0 R6_2.4.0
[10] vegan_2.5-5 lazyeval_0.2.2 BiocGenerics_0.28.0
[13] mgcv_1.8-23 colorspace_1.4-1 permute_0.9-5
[16] ade4_1.7-13 withr_2.1.2 tidyselect_0.2.5
[19] compiler_3.5.0 rvest_0.3.4 Biobase_2.42.0
[22] xml2_1.2.0 DelayedArray_0.8.0 labeling_0.3
[25] scales_1.0.0 readr_1.3.1 digest_0.6.19
[28] Rsamtools_1.34.1 rmarkdown_1.12 XVector_0.22.0
[31] pkgconfig_2.0.2 htmltools_0.3.6 htmlwidgets_1.3
[34] rlang_0.3.4 rstudioapi_0.10 shiny_1.3.2
[37] hwriter_1.3.2 jsonlite_1.6 crosstalk_1.0.0
[40] BiocParallel_1.16.6 dplyr_0.8.1 RCurl_1.95-4.12
[43] magrittr_1.5 GenomeInfoDbData_1.2.0 biomformat_1.10.1
[46] Matrix_1.2-14 munsell_0.5.0 S4Vectors_0.20.1
[49] Rhdf5lib_1.4.3 ape_5.3 stringi_1.4.3
[52] yaml_2.2.0 MASS_7.3-49 SummarizedExperiment_1.12.0
[55] zlibbioc_1.28.0 rhdf5_2.26.2 plyr_1.8.4
[58] grid_3.5.0 promises_1.0.1 parallel_3.5.0
[61] crayon_1.3.4 lattice_0.20-35 Biostrings_2.50.2
[64] splines_3.5.0 multtest_2.38.0 hms_0.4.2
[67] pillar_1.4.0 igraph_1.2.4.1 GenomicRanges_1.34.0
[70] reshape2_1.4.3 codetools_0.2-15 stats4_3.5.0
[73] glue_1.3.1 evaluate_0.13 ShortRead_1.40.0
[76] latticeExtra_0.6-28 RcppParallel_4.4.2 httpuv_1.5.1
[79] foreach_1.4.4 gtable_0.3.0 purrr_0.3.2
[82] assertthat_0.2.1 xfun_0.7 mime_0.6
[85] xtable_1.8-4 later_0.8.0 survival_2.41-3
[88] viridisLite_0.3.0 tibble_2.1.1 iterators_1.0.10
[91] GenomicAlignments_1.18.1 IRanges_2.16.0 cluster_2.0.7-1